CS-621 Theory Gems 1: Learning Non-linear Classifiers

Author

  • Alhussein Fawzi
Abstract

In the previous lectures, we focused on finding linear classifiers, i.e., ones in which the decision boundary is a hyperplane. However, in many scenarios the data points simply cannot be classified in this manner, as there might be no hyperplane that separates most of the positive examples from the negative ones (see, e.g., Figure 1(a)). Clearly, in such situations one needs to resort to more complex (non-linear) classifiers, and one might therefore expect the linear classification algorithms we have developed so far to be of no use here. Fortunately, as we will see in this lecture, this is not really the case: there are powerful and convenient ways of performing non-linear classification by building on the algorithms for the linear case. In particular, we will see two very useful and broadly applicable techniques: the kernel trick (or simply kernelization) and boosting.
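To make the kernel trick concrete, below is a minimal sketch of a kernelized perceptron in Python. It is an illustrative toy rather than the lecture's own construction: the RBF kernel, the circular toy dataset, and the training loop are all assumed choices.

    import numpy as np

    def rbf_kernel(x, z, gamma=1.0):
        # Gaussian (RBF) kernel: implicitly maps points into an
        # infinite-dimensional feature space.
        return np.exp(-gamma * np.sum((x - z) ** 2))

    def kernel_perceptron(X, y, kernel=rbf_kernel, epochs=10):
        # Dual-form perceptron: instead of a weight vector w, we keep
        # one coefficient alpha_i per training point; the data is only
        # ever touched through kernel evaluations.
        n = len(X)
        alpha = np.zeros(n)
        K = np.array([[kernel(X[i], X[j]) for j in range(n)]
                      for i in range(n)])
        for _ in range(epochs):
            for i in range(n):
                pred = np.sign(np.sum(alpha * y * K[:, i]))
                if pred != y[i]:
                    alpha[i] += 1.0  # mistake-driven update, as in the linear case
        return alpha

    # Toy data that is not linearly separable in the plane:
    # points inside a circle are +1, points outside are -1.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(40, 2))
    y = np.where(np.sum(X ** 2, axis=1) < 0.5, 1.0, -1.0)
    alpha = kernel_perceptron(X, y)

The key point is that the algorithm never accesses the data except through kernel evaluations, so it implicitly learns a linear separator in a high-dimensional feature space while all computation stays in the original one.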


Related references

CS-621 Theory Gems

That is, we want the hyperplane corresponding to (w, θ) to separate the positive examples from the negative ones. As we already argued previously, wlog we can constrain ourselves to the case where θ = 0 (i.e., the hyperplane passes through the origin) and all the examples are positive (i.e., l_j = 1 for all j). Last time we presented a simple algorithm for this problem called the Perceptron algori...
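For reference, here is a minimal sketch of the linear perceptron under the simplifications stated above (θ = 0 and every example normalized to be positive); the variable names and stopping rule are my own, not necessarily those of the lecture.

    import numpy as np

    def perceptron(X, epochs=100):
        # Assumes theta = 0 and every example already normalized to be
        # positive, so we want <w, x_j> > 0 for all rows x_j of X.
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            updated = False
            for x in X:
                if np.dot(w, x) <= 0:   # x is misclassified
                    w = w + x           # classic additive correction
                    updated = True
            if not updated:             # all examples separated
                return w
        return w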


CS-621 Theory Gems

In Lecture 10, we introduced a fundamental object of spectral graph theory, the graph Laplacian, and established some of its basic properties. We then focused on the task of estimating the eigenvalues of Laplacians. In particular, we proved the Courant-Fischer theorem, which is instrumental in obtaining upper bounds on eigenvalues. Today, we continue by showing a technique – s...
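For orientation, the Courant-Fischer theorem mentioned here admits the following standard min-max formulation for a symmetric matrix L with eigenvalues λ_1 ≤ ... ≤ λ_n (this is the textbook statement, not quoted from the lecture itself):

    \lambda_k(L) \;=\; \min_{S \subseteq \mathbb{R}^n,\ \dim S = k} \ \max_{x \in S,\ x \neq 0} \frac{x^\top L x}{x^\top x}

An upper bound on λ_k then follows by exhibiting any single k-dimensional subspace S and bounding the Rayleigh quotient over it.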


CS-621 Theory Gems

Today, we will briefly discuss an important technique in probability theory: measure concentration. Roughly speaking, measure concentration exploits the phenomenon that certain functions of random variables are highly concentrated around their expectation or median. The main example of interest to us here is the Johnson-Lindenstrauss (JL) lemma. The JL lemma is a very powerful ...
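Here is a quick numerical sketch of the phenomenon the JL lemma formalizes: a random low-dimensional projection approximately preserves pairwise distances. The Gaussian construction and the dimensions below are illustrative assumptions, not parameters from the lecture.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, k = 50, 10_000, 400          # points, ambient dim, target dim

    X = rng.normal(size=(n, d))
    # Random Gaussian projection, scaled so squared norms are preserved
    # in expectation (one common way to realize a JL map).
    P = rng.normal(size=(d, k)) / np.sqrt(k)
    Y = X @ P

    # Compare a few pairwise distances before and after projection;
    # the ratios should all be close to 1.
    for i, j in [(0, 1), (2, 3), (4, 5)]:
        before = np.linalg.norm(X[i] - X[j])
        after = np.linalg.norm(Y[i] - Y[j])
        print(f"pair ({i},{j}): distortion {after / before:.3f}")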


CS-621 Theory Gems 2: The Stock Market Model

The main topic of this lecture is the learning-from-expert-advice framework. Our goal here is to predict, as accurately as possible, a sequence of events in a situation where our only information about the future comes from the recommendations of a set of “experts”. The key feature (and difficulty) of this scenario is that most – if not all – of these experts (and thus their recommend...
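One standard algorithm in this framework is weighted majority: follow the weighted vote of the experts and multiplicatively shrink the weight of every expert that errs. The sketch below illustrates that generic idea and is not necessarily the exact variant this lecture develops.

    import numpy as np

    def weighted_majority(expert_preds, outcomes, eta=0.5):
        # expert_preds: (T, m) array of 0/1 predictions from m experts
        # outcomes:     (T,) array of the true 0/1 events
        T, m = expert_preds.shape
        w = np.ones(m)
        mistakes = 0
        for t in range(T):
            # Predict with the side holding more total weight.
            vote_one = w @ expert_preds[t]
            guess = 1 if vote_one >= w.sum() / 2 else 0
            mistakes += int(guess != outcomes[t])
            # Multiplicatively penalize every expert that was wrong.
            wrong = expert_preds[t] != outcomes[t]
            w[wrong] *= (1 - eta)
        return mistakes, w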


Application of ensemble learning techniques to model the atmospheric concentration of SO2

With a view to pollution-prediction modeling, the study adopts homogeneous (random forest, bagging, and additive regression) and heterogeneous (voting) ensemble classifiers to predict the atmospheric concentration of sulphur dioxide. For model validation, the results were compared against widely known single base classifiers such as support vector machine, multilayer perceptron, linear regression and re...
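To make the modeling recipe concrete, here is a minimal scikit-learn sketch of the homogeneous-ensemble part (random forest and bagging) on synthetic stand-in data; the study's actual SO2 measurements, features, and tuning are not available here, so everything below is an assumed illustration.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor, BaggingRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    # Synthetic stand-in for (meteorological features -> SO2 level);
    # the real study's data and preprocessing are not reproduced here.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))
    y = X[:, 0] * 2 - X[:, 1] + rng.normal(scale=0.3, size=500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for name, model in [
        ("random forest", RandomForestRegressor(n_estimators=100, random_state=0)),
        ("bagging", BaggingRegressor(n_estimators=100, random_state=0)),
    ]:
        model.fit(X_tr, y_tr)
        err = mean_absolute_error(y_te, model.predict(X_te))
        print(f"{name}: MAE = {err:.3f}")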




Publication date: 2012